Accurate travel time estimation is paramount for providing transit users with reliable schedules and dependable real-time information. This paper is the first to utilize roadside urban imagery for direct transit travel time prediction. We propose and evaluate an end-to-end framework that integrates traditional transit data sources with a roadside camera for automated image acquisition, labeling, and model training to predict transit travel times across a segment of interest. First, we show how GTFS real-time data can serve as an efficient activation mechanism for a roadside camera unit monitoring a segment of interest. Second, AVL data is used to generate ground-truth labels for the acquired images based on the observed transit travel time percentiles across the camera-monitored segment at the time of image acquisition. Finally, the resulting labeled image dataset is used to train and thoroughly evaluate a Vision Transformer (ViT) model that predicts a discrete transit travel time range (band). The results illustrate that the ViT model learns the image features and contents that best help it deduce the expected travel time range, with an average validation accuracy between 80% and 85%. We assess the interpretability of the ViT model's predictions and showcase how this discrete travel time band prediction can subsequently improve continuous transit travel time estimation. The workflow and results presented in this study provide an end-to-end, scalable, automated, and highly efficient approach for integrating traditional transit data sources and roadside imagery to improve the estimation of transit travel duration. This work also demonstrates the value of incorporating real-time information from computer vision sources, which are becoming increasingly accessible and can have major implications for improving operations and passenger real-time information.
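The AVL-based labeling step in the second stage can be sketched compactly. The following is a minimal illustration, assuming hypothetical table schemas (`capture_time` and `image_path` for images; `timestamp` and `segment_travel_time_s` for AVL records) and quartile band edges; the paper's exact percentile scheme may differ.

```python
import numpy as np
import pandas as pd

def label_images_with_bands(images: pd.DataFrame, avl: pd.DataFrame,
                            quantiles=(0.25, 0.5, 0.75)) -> pd.DataFrame:
    """Assign each image a discrete travel-time band based on the AVL-observed
    travel time over the monitored segment at the image's acquisition time."""
    # Band edges from the empirical distribution of segment travel times.
    edges = avl["segment_travel_time_s"].quantile(quantiles).to_numpy()
    bins = np.concatenate(([-np.inf], edges, [np.inf]))

    # For each image, find the AVL observation closest in time (hypothetical schema).
    merged = pd.merge_asof(
        images.sort_values("capture_time"),
        avl.sort_values("timestamp"),
        left_on="capture_time", right_on="timestamp",
        direction="nearest",
    )
    merged["band"] = pd.cut(merged["segment_travel_time_s"], bins=bins,
                            labels=range(len(bins) - 1))
    return merged[["image_path", "band"]]
```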
Hyperparameter optimization (HPO) is essential for strong deep learning performance, and practitioners often need to consider the trade-off between multiple metrics, such as error rate, latency, memory requirements, robustness, and algorithmic fairness. Given this demand and the heavy computational cost of deep learning, accelerating multi-objective (MO) optimization becomes ever more important. Although meta-learning has been extensively studied to speed up HPO, existing methods are not applicable to the MO tree-structured Parzen estimator (MO-TPE), a simple yet powerful MO-HPO algorithm. In this paper, we extend TPE's acquisition function to the meta-learning setting, using a task similarity defined by the overlap in promising domains of each task. In a comprehensive set of experiments, we demonstrate that our method accelerates MO-TPE on tabular HPO benchmarks and yields state-of-the-art performance. Our method was also validated externally by winning the AutoML 2022 competition on "Multiobjective Hyperparameter Optimization for Transformers".
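To make the extension concrete, here is a minimal one-dimensional sketch of a task-similarity-weighted TPE acquisition, where candidates are ranked by the ratio l(x)/g(x) of "good" to "bad" observation densities pooled across tasks. The weighting scheme and KDE choice are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def tpe_score(x, good_obs_per_task, bad_obs_per_task, similarities):
    """Rank 1-D candidates x by a task-similarity-weighted TPE acquisition.

    good_obs_per_task / bad_obs_per_task: one array of observed hyperparameter
    values per task (target task first); similarities: weights in [0, 1],
    with 1.0 for the target task itself.
    """
    w = np.asarray(similarities, dtype=float)
    w = w / w.sum()
    # Density of configs that yielded good (resp. bad) objective values,
    # pooled across tasks in proportion to their similarity to the target.
    l = sum(wi * gaussian_kde(obs)(x) for wi, obs in zip(w, good_obs_per_task))
    g = sum(wi * gaussian_kde(obs)(x) for wi, obs in zip(w, bad_obs_per_task))
    return l / np.maximum(g, 1e-12)  # maximize this ratio over candidates
```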
Intrusion detection for the Controller Area Network (CAN) protocol requires modern approaches in order to compete with other electrical architectures. Fingerprint intrusion detection systems (IDS) provide a promising new approach to solving this problem. By characterizing the network traffic from known ECUs, dangerous messages can be distinguished. In this paper, a modified version of a fingerprint IDS is used, training neural networks on step-response and spectral characterizations of network traffic. With the addition of feature-set reduction and hyperparameter tuning, this method achieves a 99.4% detection rate of trusted ECU traffic.
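As a rough illustration of the spectral-characterization idea, the sketch below reduces sampled CAN voltage transients to coarse FFT magnitude features (a stand-in for the paper's feature set after reduction) and trains a small neural network to identify the transmitting ECU. The data is synthetic and the architecture is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def spectral_features(traces: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Reduce each sampled voltage transient (one CAN frame edge per row)
    to a coarse magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(traces, axis=1))
    # Average neighbouring frequency bins to shrink the feature set.
    trimmed = spectrum[:, : (spectrum.shape[1] // n_bins) * n_bins]
    return trimmed.reshape(len(traces), n_bins, -1).mean(axis=2)

# Hypothetical data: voltage transients and the transmitting ECU's identity.
rng = np.random.default_rng(0)
X = spectral_features(rng.normal(size=(1000, 512)))
y = rng.integers(0, 5, size=1000)  # 5 known ECUs

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_tr, y_tr)
print(f"held-out ECU identification accuracy: {clf.score(X_te, y_te):.3f}")
```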
The loss of upper-limb control and function is a relentless symptom in post-stroke patients, imposing hardships on their activities of daily living. Supernumerary robotic limbs (SRLs) were introduced as a solution to restore lost degrees of freedom (DoF) by adding an independent new limb. Actuation systems in SRLs can be categorized into rigid and soft actuators. Soft actuators have proven advantageous over rigid ones through their inherent safety, cost, and energy efficiency; however, their low stiffness compromises their accuracy. Variable stiffness actuators (VSAs) are newly developed technologies that have been proven to ensure both accuracy and safety. In this paper, we introduce a novel supernumerary robotic limb based on variable stiffness actuators. To the best of our knowledge, the proposed proof-of-concept SRL is the first to utilize variable stiffness actuators. The developed SRL will assist post-stroke patients in completing dual tasks, such as eating with a fork and knife. The modeling, design, and realization of the system are illustrated. Its accuracy is evaluated and validated over predefined trajectories. Safety is validated by utilizing a momentum observer for collision detection, and several post-collision reaction strategies are evaluated through a soft-tissue damage test. The assistance process is qualitatively validated through a standard user-satisfaction questionnaire.
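The collision-detection component can be illustrated with a simplified generalized-momentum observer. The sketch below is a 1-DoF, constant-inertia version (an assumption made for brevity); the residual r converges to the external torque, so a collision registers as |r| crossing a threshold.

```python
import numpy as np

def momentum_observer(dq, tau, gravity_torque, inertia, dt, k_o=50.0):
    """1-DoF generalized-momentum observer with constant inertia.

    dq: joint velocity samples, tau: commanded motor torque samples,
    gravity_torque: modeled gravity torque samples, all of equal length.
    The residual r estimates the external torque without needing
    acceleration measurements.
    """
    r = np.zeros(len(dq))
    integral = 0.0
    p0 = inertia * dq[0]  # initial generalized momentum
    for k in range(1, len(dq)):
        integral += (tau[k] - gravity_torque[k] + r[k - 1]) * dt
        r[k] = k_o * (inertia * dq[k] - p0 - integral)
    return r  # threshold |r| to flag a collision and trigger a reaction
```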
Advances in deep learning and transfer learning have paved the way for various automated classification tasks in agriculture, including plant disease, pest, weed, and plant species detection. However, agricultural automation still faces various challenges, such as dataset size and the lack of plant-domain-specific pretrained models. Domain-specific pretrained models have shown state-of-the-art performance in various computer vision tasks, including face recognition and medical imaging diagnosis. In this paper, we propose the AgriNet dataset, a collection of 160k agricultural images from 19 geographical locations, captured by several imaging devices, covering more than 423 classes of plant species and diseases. We also introduce the AgriNet models, a set of pretrained models: VGG16, VGG19, Inception-v3, InceptionResNet-v2, and Xception. AgriNet-VGG19 achieved the highest classification accuracy of 94% and the highest F1 score of 92%. Additionally, all of the proposed models were found to accurately classify the 423 classes of plant species, diseases, pests, and weeds, with the Inception-v3 model having the lowest accuracy of 87%. Experiments to evaluate the advantages of the AgriNet models compared with ImageNet models were conducted on two external datasets: a pest and plant disease dataset from Bangladesh and a plant disease dataset from Kashmir.
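The transfer-learning recipe the abstract describes can be sketched with a standard Keras setup: an ImageNet-pretrained VGG19 backbone with a new classification head for the 423 classes. The head architecture and optimizer settings below are assumptions, not the paper's reported configuration.

```python
import tensorflow as tf

def build_agrinet_style_model(n_classes: int, img_size: int = 224) -> tf.keras.Model:
    """Fine-tuning setup: frozen ImageNet-pretrained VGG19 backbone plus a
    fresh softmax head (hyperparameters are assumptions)."""
    base = tf.keras.applications.VGG19(
        weights="imagenet", include_top=False, input_shape=(img_size, img_size, 3)
    )
    base.trainable = False  # warm-up: train only the new head first
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_agrinet_style_model(n_classes=423)
```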
In this paper, a method based on a feedforward neural network is proposed to reduce drift in monocular visual odometry algorithms. Visual odometry algorithms compute the incremental motion of the vehicle between successive camera frames and then integrate these increments to determine the vehicle's pose. The proposed neural network reduces errors in the vehicle's pose estimate caused by inaccuracies in feature detection and matching, camera intrinsic parameters, and so on. These inaccuracies propagate into the vehicle's motion estimates, leading to large estimation errors. The drift-reduction neural network identifies such errors based on the motion of features across consecutive camera frames, yielding more accurate incremental motion estimates. The proposed drift-reduction neural network is trained and validated using the KITTI dataset, and the results show the efficacy of the proposed approach in reducing the error in incremental orientation estimation, thereby reducing the overall error in pose estimation.
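A minimal sketch of such a drift-reduction network is given below: a small feedforward model mapping aggregated feature-motion statistics between two frames to a correction of the incremental orientation estimate. Input/output dimensions and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class DriftCorrector(nn.Module):
    """Feedforward network mapping feature-motion statistics between two
    consecutive frames to a correction of the incremental rotation estimate
    (dimensions are assumptions)."""
    def __init__(self, n_features: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 3),  # correction to (roll, pitch, yaw) increments
        )

    def forward(self, feature_motion: torch.Tensor) -> torch.Tensor:
        return self.net(feature_motion)

# corrected increment = raw VO increment + predicted correction
model = DriftCorrector()
correction = model(torch.randn(1, 64))
```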
We tackle the problem of tracking the human lower body as an initial step toward an automatic motion assessment system for clinical mobility evaluation, using a multimodal system that combines Inertial Measurement Unit (IMU) data, RGB images, and point cloud depth measurements. This system applies the factor graph representation to an optimization problem that provides 3-D skeleton joint estimations. In this paper, we focus on improving the temporal consistency of the estimated human trajectories to greatly extend the range of operability of the depth sensor. More specifically, we introduce a new factor graph factor based on Koopman theory that embeds the nonlinear dynamics of several lower-limb movement activities. This factor performs a two-step process: first, a custom activity recognition module based on spatial-temporal graph convolutional networks recognizes the walking activity; then, a Koopman pose prediction of the subsequent skeleton is used as an a priori estimation to drive the optimization problem toward more consistent results. We tested the performance of this module on datasets composed of multiple clinical lower-limb mobility tests, and we show that our approach reduces outliers on the skeleton form by almost 1 m, while preserving natural walking trajectories at depths up to more than 10 m.
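The Koopman pose-prediction factor rests on fitting a linear operator in a lifted space. The sketch below shows an EDMD-style least-squares fit and a one-step prediction; the lifting function and toy data are assumptions (the paper embeds learned dynamics per recognized activity).

```python
import numpy as np

def fit_koopman(states: np.ndarray, lift) -> np.ndarray:
    """EDMD-style least-squares fit of a linear Koopman operator K such that
    lift(x_{t+1}) ≈ K @ lift(x_t), on a sequence of skeleton states."""
    Z = np.stack([lift(x) for x in states])
    A, B = Z[:-1], Z[1:]          # rows: lifted states at t and t+1
    K_T, *_ = np.linalg.lstsq(A, B, rcond=None)
    return K_T.T

def predict_next(x, K, lift, n_state: int):
    """One-step a priori pose prediction used to regularize the factor graph
    (the lifting starts with the raw state, so the first n_state entries
    recover the predicted pose)."""
    return (K @ lift(x))[:n_state]

# Toy lifting: raw state plus simple nonlinear features (an assumption).
lift = lambda x: np.concatenate([x, np.sin(x), np.cos(x)])
states = np.random.default_rng(0).normal(size=(100, 6))  # e.g. 6 joint coords
K = fit_koopman(states, lift)
x_pred = predict_next(states[-1], K, lift, n_state=6)
```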
To achieve peak predictive performance, hyperparameter optimization (HPO) is a crucial component of machine learning and its applications. Over the last years, the number of efficient algorithms and tools for HPO has grown substantially. At the same time, the community still lacks realistic, diverse, computationally cheap, and standardized benchmarks. This is especially the case for multi-fidelity HPO methods. To close this gap, we propose HPOBench, which includes 7 existing and 5 new benchmark families, with a total of more than 100 multi-fidelity benchmark problems. HPOBench allows running this extensible suite of multi-fidelity HPO benchmarks in a reproducible way by isolating and packaging the individual benchmarks in containers. It also provides surrogate and tabular benchmarks for computationally affordable yet statistically sound evaluations. To demonstrate HPOBench's broad compatibility with various optimization tools, as well as its usefulness, we conducted an exemplary large-scale study of 13 optimizers from 6 optimization tools. We provide HPOBench at: https://github.com/automl/hpobench.
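To illustrate the access pattern such benchmarks serve (without reproducing HPOBench's actual API), here is a synthetic multi-fidelity objective queried by a successive-halving-style sweep; the objective, fidelity schedule, and noise model are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(config: dict, fidelity: float) -> float:
    """Stand-in for one benchmark problem: validation error of a config
    evaluated at a fraction of the full training budget (purely synthetic)."""
    full = (config["lr"] - 0.01) ** 2 + (config["width"] - 64) ** 2 / 1e4
    noise = rng.normal(scale=0.05 * (1.0 - fidelity))  # cheaper = noisier
    return full + noise

# Successive-halving-style sweep over fidelities, mimicking the queries a
# multi-fidelity optimizer would issue against a tabular/surrogate benchmark.
configs = [{"lr": 10 ** rng.uniform(-4, -1), "width": int(rng.integers(16, 256))}
           for _ in range(27)]
for fidelity in (1 / 9, 1 / 3, 1.0):
    scores = [objective(c, fidelity) for c in configs]
    keep = np.argsort(scores)[: max(1, len(configs) // 3)]
    configs = [configs[i] for i in keep]
print("incumbent:", configs[0])
```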
Artificial intelligence (AI) has witnessed major breakthroughs in a variety of Internet of Things (IoT) applications and services, from recommendation systems to robotics control and military surveillance. This is driven by easier access to sensory data and the enormous scale of pervasive/ubiquitous devices that generate zettabytes (ZB) of real-time data streams. Designing accurate models using such data streams, to predict future insights and revolutionize decision-making processes, inaugurates pervasive systems as a worthy paradigm for a better quality of life. The confluence of pervasive computing and artificial intelligence, Pervasive AI, has expanded the role of ubiquitous IoT systems from mainly data collection to executing distributed computations, offering a promising alternative to centralized learning while bringing various challenges. In this context, judicious cooperation and resource scheduling should be envisaged among IoT devices (e.g., smartphones, smart vehicles) and infrastructure (e.g., edge nodes and base stations) to avoid communication and computation overheads and ensure maximum performance. In this paper, we conduct a comprehensive survey of the recent techniques developed to overcome these resource challenges in pervasive AI systems. Specifically, we first present an overview of pervasive computing, its architecture, and its intersection with artificial intelligence. We then review the background, applications, and performance metrics of AI, particularly deep learning (DL) and online learning, running in ubiquitous systems. Next, we provide a deep literature review, from both algorithmic and system perspectives, of communication-efficient techniques for distributed inference, training, and online learning tasks across combinations of IoT devices, edge devices, and cloud servers. Finally, we discuss our future vision and research challenges.